
    Dataflow Programming Paradigms for Computational Chemistry Methods

    The transition to multicore and heterogeneous architectures has shaped the High Performance Computing (HPC) landscape over the past decades. With the increase in scale, complexity, and heterogeneity of modern HPC platforms, one of the major challenges for traditional programming models is to sustain the expected performance at scale. By contrast, dataflow programming models have been growing in popularity as a means to deliver a good balance between performance and portability in the post-petascale era. This work introduces dataflow programming models for computational chemistry methods and compares different dataflow executions in terms of programmability, resource utilization, and scalability. The effort is driven by computational chemistry applications, as they are one of the driving forces of HPC. In particular, many-body methods such as Coupled Cluster (CC) methods, the gold standard for computing energies in quantum chemistry, are of special interest to the applied chemistry community. On that account, the latest development of CC methods is used as the primary vehicle for this research, but our effort is not limited to CC and can be applied across other application domains. Two programming paradigms for expressing CC methods in dataflow form, so that they can utilize task scheduling systems, are presented: explicit dataflow, where the dataflow is specified explicitly by the developer, is contrasted with implicit dataflow, where a task scheduling runtime derives the dataflow. An abstract model is derived to explore the limits of the different dataflow programming paradigms.
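    To make the two paradigms concrete, the sketch below shows the implicit style in C with OpenMP tasks, one of several runtimes that support it: the program is written serially, each task is annotated with its data accesses (depend clauses), and the runtime derives the task graph from those annotations. This is an illustrative sketch, not code from the work described; the kernels compute_block and combine_blocks are hypothetical stand-ins for CC tensor work. In the explicit paradigm (e.g., PaRSEC's parameterized task graphs), the developer would instead describe these dependencies directly rather than annotating a serial program.

        #include <stdio.h>

        #define N 4

        /* Hypothetical kernels, stand-ins for CC tensor contractions. */
        static void compute_block(double *blk) { *blk += 1.0; }
        static void combine_blocks(double *out, const double *in) { *out += *in; }

        int main(void)
        {
            double a[N] = {0}, sum = 0.0;

            #pragma omp parallel
            #pragma omp single
            {
                for (int i = 0; i < N; i++) {
                    /* The runtime derives the dataflow from the depend
                       clauses: each compute task writes a[i]... */
                    #pragma omp task depend(out: a[i])
                    compute_block(&a[i]);

                    /* ...and each combine task reads a[i] and updates sum,
                       so it runs after the matching compute task and after
                       the previous combine task (inout on sum). */
                    #pragma omp task depend(in: a[i]) depend(inout: sum)
                    combine_blocks(&sum, &a[i]);
                }
            } /* implicit barrier: all tasks have completed here */

            printf("sum = %f\n", sum);
            return 0;
        }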

    Evaluation of Dataflow Programming Models for Electronic Structure Theory

    Dataflow programming models have been growing in popularity as a means to deliver a good balance between performance and portability in the post-petascale era. In this paper we evaluate different dataflow programming models for electronic structure methods and compare them in terms of programmability, resource utilization, and scalability. In particular, we evaluate two programming paradigms for expressing scientific applications in a dataflow form: (1) explicit dataflow, where the dataflow is specified explicitly by the developer, and (2) implicit dataflow, where a task scheduling runtime derives the dataflow using per-task data-access information embedded in a serial program. We discuss our findings and present a thorough experimental analysis using methods from the NWChem quantum chemistry application as our case study, and OpenMP, StarPU, and PaRSEC as the task-based runtimes that enable the different forms of dataflow execution. Furthermore, we derive an abstract model to explore the limits of the different dataflow programming paradigms.
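    As a rough illustration of the task-insertion style that StarPU (one of the runtimes evaluated here) supports, the sketch below registers a data handle with the runtime and submits two tasks on it; the STARPU_RW access mode is the per-task data-access information from which the runtime infers that the two tasks must be serialized. The vector-scaling kernel is a made-up example under assumed standard StarPU usage, not code from the paper.

        #include <starpu.h>

        /* CPU implementation of a task: scales a vector in place. */
        static void scal_cpu(void *buffers[], void *cl_arg)
        {
            double factor;
            double *x = (double *)STARPU_VECTOR_GET_PTR(buffers[0]);
            unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
            starpu_codelet_unpack_args(cl_arg, &factor);
            for (unsigned i = 0; i < n; i++) x[i] *= factor;
        }

        static struct starpu_codelet scal_cl = {
            .cpu_funcs = { scal_cpu },
            .nbuffers  = 1,
            .modes     = { STARPU_RW },  /* access mode drives dependency inference */
        };

        int main(void)
        {
            double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
            double factor = 2.0;
            starpu_data_handle_t h;

            if (starpu_init(NULL) != 0) return 1;
            starpu_vector_data_register(&h, STARPU_MAIN_RAM,
                                        (uintptr_t)x, 8, sizeof(double));

            /* Two tasks touching the same handle in read-write mode:
               StarPU infers the ordering from the data accesses. */
            starpu_task_insert(&scal_cl, STARPU_RW, h,
                               STARPU_VALUE, &factor, sizeof(factor), 0);
            starpu_task_insert(&scal_cl, STARPU_RW, h,
                               STARPU_VALUE, &factor, sizeof(factor), 0);

            starpu_task_wait_for_all();
            starpu_data_unregister(h);
            starpu_shutdown();
            return 0;
        }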

    SI2-SSE: PAPI Unifying Layer for Software-Defined Events (PULSE)

    PULSE builds on the latest Performance API (PAPI) project and extends it with software-defined events (SDEs) that originate from the HPC software stack and are currently treated as black boxes (i.e., communication libraries, math libraries, task-based runtime systems, applications). The objective is to enable monitoring of both types of performance events, hardware- and software-related, in a uniform way through one consistent PAPI interface. As a result, third-party tools and application developers have to handle only a single hook into PAPI to access all hardware performance counters in a system, including the new software-defined events.
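    The intended usage model, as described in PAPI's SDE documentation, is sketched below in C; treat the header name, the library name "MyLib", and its counter as illustrative assumptions rather than code from the project. A library registers an internal counter as a software-defined event, and a tool then reads that event through the same PAPI event-set calls it would use for a hardware counter.

        #include <stdio.h>
        #include "papi.h"
        #include "papi_sde_interface.h"  /* SDE registration API (assumed header) */

        static long long tasks_completed = 0;  /* counter owned by the library */

        /* Library side: expose the internal counter as an SDE. */
        static void mylib_register_sde(void)
        {
            papi_handle_t h = papi_sde_init("MyLib");
            papi_sde_register_counter(h, "tasks_completed",
                                      PAPI_SDE_RO | PAPI_SDE_DELTA,
                                      PAPI_SDE_long_long, &tasks_completed);
        }

        int main(void)
        {
            int es = PAPI_NULL;
            long long value;

            PAPI_library_init(PAPI_VER_CURRENT);
            mylib_register_sde();

            /* Tool side: the SDE is added and read through the ordinary
               PAPI event-set interface, like any hardware counter. */
            PAPI_create_eventset(&es);
            PAPI_add_named_event(es, "sde:::MyLib::tasks_completed");
            PAPI_start(es);

            for (int i = 0; i < 1000; i++)
                tasks_completed++;  /* stand-in for real library work */

            PAPI_stop(es, &value);
            printf("tasks completed: %lld\n", value);
            return 0;
        }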

    High Performance Computing: ISC High Performance 2019 International Workshops, Frankfurt, Germany, June 16-20, 2019, Revised Selected Papers

    This book constitutes the refereed proceedings of the 34th International Conference on High Performance Computing, ISC High Performance 2019, held in Frankfurt/Main, Germany, in June 2019. The 17 revised full papers presented were carefully reviewed and selected from 70 submissions. The papers cover a broad range of topics such as next-generation high performance components; exascale systems; extreme-scale applications; HPC and advanced environmental engineering projects; parallel ray tracing - visualization at its best; blockchain technology and cryptocurrency; parallel processing in life science; quantum computers/computing; what's new with cloud computing for HPC; parallel programming models for extreme-scale computing; workflow management; machine learning and big data analytics; and deep learning and HPC.